
feat(openclaw): add Ollama provider with GLM-4 and Llama 3.2#63

Open
Castrozan wants to merge 1 commit into main from feat/ollama-integration

Conversation

@Castrozan
Owner

What

Integrates local Ollama inference into OpenClaw via configPatches in openclaw.nix.

Changes

  • Provider: http://localhost:11434 with native ollama API type
  • Models:
    • glm4 (GLM-4, 5.5 GB) — alias /model glm
    • llama3.2 (Llama 3.2 3B, 2 GB) — alias /model llama
  • Both models already pulled and tested locally
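As a rough illustration of the shape of the patch, a hypothetical sketch of the configPatches entry in openclaw.nix (attribute names here are illustrative assumptions; the actual OpenClaw schema may differ):

```nix
# Hypothetical sketch -- the real openclaw.nix schema may use different
# attribute names; this only illustrates the shape of the provider patch.
{
  configPatches = {
    providers.ollama = {
      baseUrl = "http://localhost:11434"; # native Ollama API
      apiType = "ollama";
    };
    models = {
      glm4 = {
        provider = "ollama";
        alias = "glm";   # enables `/model glm`
      };
      "llama3.2" = {
        provider = "ollama";
        alias = "llama"; # enables `/model llama`
      };
    };
  };
}
```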

Usage

/model glm    → switch to GLM-4 (local)
/model llama  → switch to Llama 3.2 (local)
/model sonnet → back to Claude Sonnet

Notes

  • Ollama systemd user service is already in home/modules/ollama/default.nix
  • Run `systemctl --user start ollama` manually until the next rebuild makes the service persistent
  • Zero cost, fully offline, no API key required
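For reference, a minimal home-manager-style sketch of the kind of systemd user unit that home/modules/ollama/default.nix provides (the unit options below are illustrative, not a copy of the actual module):

```nix
# Illustrative sketch only -- the real unit lives in
# home/modules/ollama/default.nix and may differ.
{ pkgs, ... }:
{
  systemd.user.services.ollama = {
    Unit.Description = "Ollama local LLM server";
    Service = {
      ExecStart = "${pkgs.ollama}/bin/ollama serve";
      Restart = "on-failure";
    };
    # Started automatically on login once activated by a rebuild.
    Install.WantedBy = [ "default.target" ];
  };
}
```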

@Castrozan force-pushed the main branch 3 times, most recently from 6227bc7 to b84e24d on March 11, 2026 at 01:21